61 research outputs found

    Learning Discriminative Stein Kernel for SPD Matrices and Its Applications

    The Stein kernel has recently shown promising performance in classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: i) eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation; ii) more importantly, eigenvalues only reflect the property of an individual SPD matrix and are not necessarily optimal for computing the Stein kernel when the goal is to discriminate different sets of SPD matrices. To address both issues at once, we propose a discriminative Stein kernel, in which an extra parameter vector is defined to adjust the eigenvalues of the input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three kernel learning criteria commonly used in the literature are each employed as the proxy. A comprehensive experimental study on a variety of image classification tasks compares the proposed discriminative Stein kernel with the original Stein kernel and other commonly used measures of similarity between SPD matrices. The results demonstrate that, by adjusting the eigenvalues, the discriminative Stein kernel attains greater discriminative power and better aligns with classification tasks, yielding higher classification performance than the original Stein kernel and the other methods. (Comment: 13 pages)
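For readers unfamiliar with the Stein kernel, the following sketch shows the S-divergence between SPD matrices and a power-based eigenvalue adjustment; the power form and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def stein_divergence(X, Y):
    """S-divergence between SPD matrices: logdet((X+Y)/2) - (1/2) logdet(XY)."""
    # slogdet is used instead of det for numerical stability
    s1 = np.linalg.slogdet((X + Y) / 2)[1]
    s2 = 0.5 * (np.linalg.slogdet(X)[1] + np.linalg.slogdet(Y)[1])
    return s1 - s2

def adjust_eigenvalues(X, alpha):
    """Power-adjust the eigenvalues of SPD matrix X by a parameter vector
    alpha (a hypothetical form of the learned adjustment)."""
    w, U = np.linalg.eigh(X)
    return (U * np.power(w, alpha)) @ U.T   # U diag(w**alpha) U^T

def stein_kernel(X, Y, theta=1.0, alpha=None):
    """Stein kernel k(X, Y) = exp(-theta * S(X, Y)); if alpha is given,
    the eigenvalues of both inputs are adjusted first."""
    if alpha is not None:
        X, Y = adjust_eigenvalues(X, alpha), adjust_eigenvalues(Y, alpha)
    return np.exp(-theta * stein_divergence(X, Y))
```

With `alpha = 0` every adjusted eigenvalue becomes 1 and the matrix collapses to the identity; in the paper the parameter vector is instead tuned against a proxy of classification performance.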

    Subject-adaptive Integration of Multiple SICE Brain Networks with Different Sparsity

    As a principled method for partial correlation estimation, sparse inverse covariance estimation (SICE) has been employed to model brain connectivity networks, which holds great promise for brain disease diagnosis. For each subject, the SICE method naturally leads to a set of connectivity networks with varying sparsity. However, existing methods usually select a single network from this set for classification, so the discriminative power of the whole set has not been fully exploited. This paper argues that the connectivity networks at different sparsity levels present complementary connectivity patterns and should therefore be jointly considered to achieve high classification performance. We propose a subject-adaptive method to integrate multiple SICE networks into a unified representation for classification. The integration weight is learned adaptively for each subject, giving the method the flexibility to deal with subject variations. Furthermore, to respect the manifold geometry of SICE networks, the Stein kernel is employed to embed the manifold structure into a kernel-induced feature space, which allows a linear integration of SICE networks to be designed. The optimization of the integration weight and the classification of the integrated networks are performed within a sparse representation framework. Our method provides a unified and effective network representation that is transparent to the sparsity level of the SICE networks and can be readily utilized for further medical analysis. An experimental study on the ADHD and ADNI data sets demonstrates that the proposed integration method achieves a notable improvement in classification performance over methods using a single sparsity level of SICE networks and over other commonly used integration methods, such as Multiple Kernel Learning.
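A minimal sketch of the two stages, assuming scikit-learn's `GraphicalLasso` as the SICE estimator and uniform placeholder weights (the paper learns subject-specific weights inside a sparse representation framework):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def sice_networks(signals, alphas):
    """Estimate one SICE brain network per sparsity level.

    signals: (n_timepoints, n_regions) array of regional time series.
    alphas:  list of L1 penalties; a larger alpha yields a sparser network.
    Returns a list of precision (inverse covariance) matrices.
    """
    return [GraphicalLasso(alpha=a).fit(signals).precision_ for a in alphas]

def integrate_networks(networks, weights):
    """Linearly combine the networks with per-subject weights.

    Uniform weights here are only a placeholder; the paper adapts the
    weights to each subject during classification.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * N for w, N in zip(weights, networks))
```

The combined matrix stays symmetric because it is a convex combination of symmetric precision matrices.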

    Construction of Ideological and Political Mixed Teaching in Higher Education under the Digital Transformation

    This article focuses on the current demands for reform in ideological and political education in higher education, within the context of the digital information and communication era. Specifically, it proposes a plan for integrating ideological and political education into higher education courses using the widely used blended learning mode in a digitally transformed environment. The plan aims to leverage digital teaching methods, such as the development of multimedia courseware, to enrich classroom teaching content, increase student engagement and learning outcomes, deepen students' understanding and awareness, and enable them to effectively absorb a wealth of information in a limited time. By subtly linking the process of learning professional knowledge with personal, social, and national development, the plan seeks to foster a professional education philosophy that cultivates "socialist successors."

    Beyond covariance: feature representation with nonlinear kernel matrices

    The covariance matrix has recently received increasing attention in computer vision through work leveraging the Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it is now used as a generic representation in various recognition tasks. However, the covariance matrix has shortcomings: it is prone to singularity, limited in modeling complicated feature relationships, and fixed in its form of representation. This paper argues that more appropriate SPD-matrix-based representations should be explored to achieve better recognition. It proposes an open framework that uses the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework generalizes the covariance representation and opens it to the much wider family of kernel-matrix representations. An experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves a significant improvement on skeleton-based human action recognition, demonstrating state-of-the-art performance over both the covariance and the existing non-covariance representations.
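The contrast between the two representations can be sketched as follows; the RBF kernel and the `gamma` parameter are illustrative choices only, since the framework is open to any kernel over feature dimensions:

```python
import numpy as np

def covariance_descriptor(F):
    """Classical covariance descriptor: F is an (n_points, d) matrix of
    features collected over an image region."""
    Fc = F - F.mean(axis=0)
    return Fc.T @ Fc / F.shape[0]

def kernel_matrix_descriptor(F, gamma=1.0):
    """Kernel-matrix representation over feature dimensions: an RBF kernel
    (one possible choice) replaces the linear inner product, so entry
    (i, j) measures a nonlinear relationship between feature dimensions
    i and j."""
    Fc = (F - F.mean(axis=0)).T                       # rows = feature dimensions
    sq = np.sum(Fc**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Fc @ Fc.T    # pairwise squared distances
    return np.exp(-gamma * d2)
```

Unlike the covariance descriptor, which is singular whenever the region has fewer points than feature dimensions, an RBF kernel matrix over distinct feature dimensions is generically full-rank, which addresses the singularity shortcoming mentioned above.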

    Depth-Based Subgraph Convolutional Neural Networks

    This paper proposes a new graph convolutional neural network architecture based on a depth-based representation of graph structure derived from quantum walks, which we refer to as the quantum-based subgraph convolutional neural network (QS-CNN). This architecture captures both the global topological structure and the local connectivity structure within a graph. Specifically, we commence by establishing a family of K-layer expansion subgraphs for each vertex of a graph via quantum walks, which captures the global topological arrangement of the substructures contained within the graph. We then design a set of fixed-size convolution filters over the subgraphs to characterise the multi-scale patterns residing in the data. The idea is to slide convolution filters over the entire set of subgraphs rooted at a vertex to extract local features, analogous to the standard convolution operation on grid data. Experiments on eight graph-structured datasets demonstrate that the QS-CNN architecture outperforms fourteen state-of-the-art methods on node classification and graph classification tasks.
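The K-layer expansion can be illustrated with a classical breadth-first-search stand-in (the paper derives the expansion from quantum walks, so this is only a structural analogue of the subgraph family, not the actual method):

```python
from collections import deque

def k_layer_subgraphs(adj, K):
    """For each vertex, build the family of expansion subgraphs
    {vertices within h hops}, h = 1..K, as vertex sets.

    adj: dict mapping vertex -> iterable of neighbours.
    Returns {vertex: [frozenset for layer 1, ..., frozenset for layer K]}.
    """
    out = {}
    for root in adj:
        # breadth-first search up to depth K records hop distances
        dist = {root: 0}
        q = deque([root])
        while q:
            u = q.popleft()
            if dist[u] == K:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[root] = [frozenset(v for v, d in dist.items() if d <= h)
                     for h in range(1, K + 1)]
    return out
```

Each vertex thus owns a nested family of K subgraphs, over which the fixed-size convolution filters described above would slide.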

    Two-and-a-half Order Score-based Model for Solving 3D Ill-posed Inverse Problems

    Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are crucial technologies in the field of medical imaging. Score-based models have proven effective in addressing inverse problems encountered in CT and MRI, such as sparse-view CT and fast MRI reconstruction. However, these models face challenges in achieving accurate three-dimensional (3D) volumetric reconstruction: existing score-based models primarily learn two-dimensional (2D) data distributions, leading to inconsistencies between adjacent slices in the reconstructed 3D volumes. To overcome this limitation, we propose a novel two-and-a-half order score-based model (TOSM). During the training phase, TOSM learns data distributions in 2D space, which reduces the complexity of training compared to working directly on 3D volumes. In the reconstruction phase, however, TOSM updates the data distribution in 3D space, utilizing complementary scores along three directions (sagittal, coronal, and transaxial) to achieve a more precise reconstruction. The development of TOSM is built on solid theoretical principles, ensuring its reliability and efficacy. Through extensive experimentation on large-scale sparse-view CT and fast MRI datasets, our method demonstrates remarkable advances and attains state-of-the-art results in solving 3D ill-posed inverse problems. Notably, the proposed TOSM effectively addresses the inter-slice inconsistency issue, resulting in high-quality 3D volumetric reconstruction. (Comment: 10 pages, 13 figures)
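The reconstruction-phase idea of combining directional scores can be sketched as follows; averaging the three slice-wise scores is an assumed, simplified combination rule, and `score_2d` stands in for a trained 2D score network:

```python
import numpy as np

def combined_3d_score(volume, score_2d):
    """Combine slice-wise 2D scores along the three directions
    (sagittal, coronal, transaxial) into one 3D score field.

    volume:   3D NumPy array.
    score_2d: any function mapping a 2D slice to a same-shaped score
              estimate (here a stand-in for a trained score network).
    """
    score = np.zeros_like(volume, dtype=float)
    for axis in range(3):
        moved = np.moveaxis(volume, axis, 0)              # slices along this axis
        per_slice = np.stack([score_2d(s) for s in moved])
        score += np.moveaxis(per_slice, 0, axis)          # restore orientation
    return score / 3.0
```

As a sanity check, the score of a standard Gaussian at x is -x, so passing `lambda s: -s` as the 2D model returns `-volume` regardless of slicing direction.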

    Functional Brain Network Classification With Compact Representation of SICE Matrices


    Developing advanced methods for covariance representations in computer vision

    Computer vision aims at producing numerical or symbolic information, e.g., decisions, by acquiring, processing, analyzing and understanding images or other high-dimensional data. In many computer vision contexts, the data are represented by or converted to covariance-based representations, including the covariance descriptor and sparse inverse covariance estimation (SICE), due to their desirable properties. While enjoying these beneficial properties, covariance representations also bring challenges. Both the covariance descriptor and the SICE matrix belong to the set of symmetric positive-definite (SPD) matrices, which forms a Riemannian manifold in a Euclidean space. As a consequence of this special geometrical structure, many learning algorithms developed in Euclidean spaces cannot be directly applied to covariance representations, because they do not take this structure into consideration. The increasingly wide application of covariance representations in computer vision tasks therefore calls for advanced methods to process and analyze them. This thesis aims to develop such methods, approaching the goal from four perspectives. 1) It first proposes a novel kernel function, the discriminative Stein kernel (DSK), for the covariance descriptor; DSK is learned in a supervised manner through eigenvalue adjustment. 2) It then pushes forward the application of covariance representations: observing that the high dimensionality of SICE matrices can adversely affect classification performance, it uses SPD-kernel PCA to extract principal components and obtain a compact, informative representation for classification. 3) To fully utilize the complementary information in SICE matrices at multiple sparsity levels, it develops a subject-adaptive integration of SICE matrices for joint representation and classification. 4) Finally, considering the issues the covariance descriptor encounters with high feature dimensionality and small sample size, it generalizes the covariance descriptor with a kernel matrix over feature dimensions, extending the fixed form of the covariance descriptor to an open framework of kernel-matrix-based representations.
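The compact representation of perspective 2 rests on kernel PCA over SPD matrices. A minimal sketch, assuming a precomputed kernel matrix between the SICE matrices (e.g., from a Stein or log-Euclidean kernel), is:

```python
import numpy as np

def kernel_pca_features(K, n_components):
    """Compact representation via kernel PCA.

    K: precomputed (n, n) kernel matrix over SPD matrices.
    Returns an (n, n_components) array of projections onto the top
    principal components in the kernel-induced feature space.
    """
    n = K.shape[0]
    # centre the kernel matrix in feature space: Kc = H K H
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]   # largest eigenvalues first
    w, V = w[idx], V[:, idx]
    # scale eigenvectors so that Z @ Z.T reproduces the centred kernel
    return V * np.sqrt(np.maximum(w, 0))
```

The projections can then be fed to any Euclidean classifier, sidestepping the high dimensionality of raw SICE matrices while respecting the manifold geometry through the choice of kernel.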